# Painting stimuli (Section 1.1)
dtp <- fread(paste0(path, "0_doc/3paintings.txt"), skip = 4)        # FACET export
dtp <- dtp[NoOfFaces == 1 & MediaTime %in% c(33, 433, 967)]         # single-face frames at selected timestamps
cols_to_keep <- grep("AU", names(dtp), value = TRUE)                # keep AU evidence columns only
dtp <- dtp[, ..cols_to_keep]
dtp <- dtp[, lapply(.SD, function(x) pmin(pmax(x, 0), 2))]          # clamp evidence scores to [0, 2]
dtp <- data.table(t(dtp), keep.rownames = TRUE)                     # transpose: one row per AU
dtp[, rn := gsub(pattern = " Evidence", replacement = "", x = rn)]  # shorten AU labels
# Validation using CK+ data (Section 2.1)
dtck <- data.table(read_excel(paste0(path, "0_doc/ACC_CK.xlsx"), sheet = "Sheet2"))
dtck[, Acc. := paste(round(Acc., 3) * 100, "%")]                    # format accuracy as percentage
dtck[, 'URL (source: iMotions)' := ""]                              # placeholder column, filled with images below
# AU data (Section 3.1)
dta1 <- fread(paste0(path, "1_data/A1_freq&sum.csv"))
# ML data (Section 3.2)
dtml <- as.data.table(read_excel("C:/zm/MA/1_data/erg_arbeit.xlsx", sheet = "vp", range = "A2:E32"))
setnames(dtml, "Dataset", "vp")
dtml[, vp := as.character(1:30)]                                    # participant IDs 1-30
0 Introduction
This page documents the data visualizations used in my master thesis (Zhang (2016)). The highcharter and kableExtra packages for R were employed, and this RMarkdown page was rendered with a template from the rmdformats package.
1 Background
1.1 FACS
The Facial Action Coding System (FACS, Ekman and Friesen (1978)) can be used to describe facial expressions systematically based on activity in atomic units of facial action, the action units (AUs).
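As an illustration of this coding scheme, a few AUs and their standard FACS names can be held in a simple lookup (a minimal sketch; the vector `au_names` is illustrative and not part of the thesis code):

```r
# Illustrative lookup of a few FACS action units and their standard names
au_names <- c(
  "AU 1"  = "Inner Brow Raiser",
  "AU 4"  = "Brow Lowerer",
  "AU 12" = "Lip Corner Puller",
  "AU 15" = "Lip Corner Depressor"
)
au_names["AU 12"]  # -> "Lip Corner Puller"
```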
Examples
AU Polar Plot
highchart() %>%
  hc_chart(type = "line", polar = TRUE) %>%
  hc_xAxis(categories = dtp[, rn]) %>%
  hc_yAxis(min = -2, max = 2) %>%
  hc_add_series(
    name = "Woman",
    data = dtp[, V2],
    pointPlacement = "on",
    type = "line",
    color = "#b45c3f",
    showInLegend = TRUE
  ) %>%
  hc_add_series(
    name = "Man",
    data = dtp[, V3],
    pointPlacement = "on",
    type = "line",
    color = "#a2b19b",
    showInLegend = TRUE
  )
1.2 Emotions and AUs
Emotional facial expressions can be assessed by evaluating individual AUs or combinations of AUs. Although Ekman and Friesen (1978) suggested that specific combinations of AUs represent prototypical expressions of emotion, emotion-related expressions are not part of the FACS (Kanade, Cohn, and Tian (2000)). The FACS itself is purely descriptive and does not include inferential labels.
dta <- data.table(
Emotion = c("fear","sadness","surprise","surprise","anger","disgust","sadness",
"fear","surprise","disgust","joy","anger","disgust","disgust",
"joy","sadness","anger","disgust","sadness","anger","fear","fear"),
AU = c("AU 1","AU 1","AU 1","AU 2","AU 4","AU 4","AU 4","AU 5",
"AU 5","AU 6","AU 6","AU 7","AU 9","AU 10","AU 12","AU 15",
"AU 17","AU 17","AU 17","AU 23","AU 25","AU 26")
)
hchart(data_to_sankey(dta), "sankey", name = "Emotions and AUs",
       nodes = list(list(id = "fear",     color = "#00008B"),
                    list(id = "sadness",  color = "#778899"),
                    list(id = "surprise", color = "#FFA500"),
                    list(id = "anger",    color = "#FF0000"),
                    list(id = "disgust",  color = "#808000"),
                    list(id = "joy",      color = "#FFD700"))) %>%
  hc_title(text = "Sankey Diagram") %>%
  hc_subtitle(text = "Action Units and Emotions") %>%
  hc_caption(text = "<b>based on Ekman and Friesen (1978).</b>") %>%
  hc_add_theme(hc_theme_smpl())
2 Method
2.1 Validation of software
The Attention Tool FACET Module (FACET, iMotions) is face- and AU-detection software based on the FACS. It can track and quantify changes in AUs frame by frame and has been validated in studies comparing it with human coders (Krumhuber et al. (2021)) and with facial electromyography (EMG) recordings (Kulke, Feyerabend, and Schacht (2020)). Before application, the software was additionally evaluated here using images from the Extended Cohn-Kanade Facial Expression Database (CK+, Lucey et al. (2010)). The validation results are shown below.
kbl(dtck, escape = FALSE) %>%
  kable_paper(full_width = FALSE, bootstrap_options = c("striped", "hover", "condensed", "responsive")) %>%
column_spec(5, image = spec_image(
c("https://imotions.com/wp-content/uploads/2022/10/AU1-FACS.gif",
"https://imotions.com/wp-content/uploads/2022/10/AU2-right-only.gif",
"https://imotions.com/wp-content/uploads/2022/10/AU4-brow-lowerer.gif",
"https://imotions.com/wp-content/uploads/2022/10/AU5.gif",
"https://imotions.com/wp-content/uploads/2022/10/AU6-cheek-raiser.gif",
"https://imotions.com/wp-content/uploads/2022/10/AU7-lid-tightener.gif",
"https://imotions.com/wp-content/uploads/2022/10/AU9-with-410.gif",
"https://imotions.com/wp-content/uploads/2022/10/AU10-with-25.gif",
"https://imotions.com/wp-content/uploads/2022/10/AU12.gif",
"https://imotions.com/wp-content/uploads/2022/10/AU14-dimpler.gif",
"https://imotions.com/wp-content/uploads/2022/10/AU15.gif",
"https://imotions.com/wp-content/uploads/2022/10/AU17.gif",
"https://imotions.com/wp-content/uploads/2022/10/AU18-with-22A-and-25A.gif",
"https://imotions.com/wp-content/uploads/2022/10/AU20-lip-stretcher.gif",
"https://imotions.com/wp-content/uploads/2022/10/AU23-lip-tightener.gif",
"https://imotions.com/wp-content/uploads/2022/10/AU24.gif",
"https://imotions.com/wp-content/uploads/2022/10/AU25-lips-part.gif",
"https://imotions.com/wp-content/uploads/2022/10/AU26-with-25.gif",
"https://imotions.com/wp-content/uploads/2022/10/AU28-with-26.gif"), 150, 70))
| AU | Meaning | Acc. | N | URL (source: iMotions) |
|---|---|---|---|---|
| 1 | Inner brow raiser | 89.7 % | 175 | (image) |
| 2 | Outer brow raiser | 88.9 % | 117 | (image) |
| 4 | Brow lowerer | 94.3 % | 194 | (image) |
| 5 | Upper lid raiser | 95.1 % | 102 | (image) |
| 6 | Cheek raiser | 92.7 % | 123 | (image) |
| 7 | Lid tightener | 93.4 % | 121 | (image) |
| 9 | Nose wrinkler | 100 % | 75 | (image) |
| 10 | Upper lip raiser | 90.5 % | 21 | (image) |
| 12 | Lip corner puller | 95.4 % | 131 | (image) |
| 14 | Dimpler | 67.6 % | 37 | (image) |
| 15 | Lip corner depressor | 89.4 % | 94 | (image) |
| 17 | Chin raiser | 86.6 % | 202 | (image) |
| 18 | Lip pucker | 88.9 % | 9 | (image) |
| 20 | Lip stretcher | 92.4 % | 79 | (image) |
| 23 | Lip tightener | 63.3 % | 60 | (image) |
| 24 | Lip pressor | 65.5 % | 58 | (image) |
| 25 | Lips part | 76.9 % | 324 | (image) |
| 26 | Jaw drop | 48 % | 50 | (image) |
| 28 | Lip suck | 100 % | 1 | (image) |
2.2 Experiment Design
The participants drove twelve driving-simulator scenarios on a two-lane urban road. Frustration was induced by a combination of time pressure and obstacles (Frust). In three scenarios, the participants drove almost freely in moderate traffic (noFrust).